18 research outputs found

    TDCMR: Triplet-Based Deep Cross-Modal Retrieval for geo-multimedia data

    Get PDF
    Mass multimedia data with geographical information (geo-multimedia) are collected and stored on the Internet due to the wide application of location-based services (LBS). How to find the high-level semantic relationships between geo-multimedia data and construct an efficient index is crucial for large-scale geo-multimedia retrieval. To address this challenge, this paper proposes a deep cross-modal hashing framework for geo-multimedia retrieval, termed Triplet-based Deep Cross-Modal Retrieval (TDCMR), which utilizes deep neural networks and an enhanced triplet constraint to capture high-level semantics. In addition, a novel hybrid index, called TH-Quadtree, is developed by combining cross-modal binary hash codes with a quadtree to support high-performance search. Extensive experiments are conducted on three commonly used benchmarks, and the results show the superior performance of the proposed method.
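
    As a rough illustration of the triplet constraint mentioned in this abstract, the sketch below shows a standard cross-modal triplet loss over hash-like embeddings (PyTorch). The encoder outputs, margin value, and code length are illustrative assumptions, not the authors' actual TDCMR implementation.

```python
# Minimal sketch of a cross-modal triplet loss over hash-like embeddings.
# The margin, code length, and index selection are illustrative assumptions.
import torch
import torch.nn.functional as F

def cross_modal_triplet_loss(img_codes, txt_codes, pos_idx, neg_idx, margin=0.5):
    """img_codes, txt_codes: (N, code_len) tanh-activated outputs of two
    modality-specific networks; pos_idx/neg_idx select semantically matching
    and non-matching text codes for each image anchor."""
    anchor = img_codes                     # image anchors
    positive = txt_codes[pos_idx]          # matching text samples
    negative = txt_codes[neg_idx]          # non-matching text samples
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    # Pull matching pairs together and push non-matching pairs apart by a margin.
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage: 8 samples with 32-bit codes.
img = torch.tanh(torch.randn(8, 32))
txt = torch.tanh(torch.randn(8, 32))
loss = cross_modal_triplet_loss(img, txt, pos_idx=torch.arange(8),
                                neg_idx=torch.randperm(8))
```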

    Diagnosis of Schistosoma infection in non-human animal hosts: A systematic review and meta-analysis

    Get PDF
    Background: Reliable and field-applicable diagnosis of schistosome infections in non-human animals is important for surveillance, control, and verification of interruption of human schistosomiasis transmission. This study aimed to summarize the use of available diagnostic techniques through a systematic review and meta-analysis. Methodology and principal findings: We systematically searched the literature and reports comparing two or more diagnostic tests for schistosome infection in non-human animals. Out of 4,909 articles and reports screened, 19 met our inclusion criteria, four of which were considered in the meta-analysis. A total of 14 techniques (parasitologic, immunologic, and molecular) and nine types of non-human animals were involved in the studies. Notably, four studies compared parasitologic tests (the miracidium hatching test (MHT), Kato-Katz (KK), the Danish Bilharziasis Laboratory technique (DBL), and formalin-ethyl acetate sedimentation-digestion (FEA-SD)) with quantitative polymerase chain reaction (qPCR); sensitivity estimates (using qPCR as the reference) were extracted and included in the meta-analysis, showing significant heterogeneity across studies and animal hosts. The pooled estimate of sensitivity was 0.21 (95% confidence interval (CI): 0.03–0.48), with FEA-SD showing the highest sensitivity (0.89, 95% CI: 0.65–1.00). Conclusions/significance: Our findings suggest that the parasitologic technique FEA-SD and the molecular technique qPCR are the most promising techniques for schistosome diagnosis in non-human animal hosts. Future studies are needed to validate and standardize these techniques for real-world field applications.

    Graph Representation-Based Deep Multi-View Semantic Similarity Learning Model for Recommendation

    No full text
    With the rapid development of Internet technology, how to mine and analyze massive amounts of network information to provide users with accurate and fast recommendations has become a hot and difficult topic of joint research in industry and academia in recent years. One of the most widely used social network recommendation methods is collaborative filtering. However, traditional social network-based collaborative filtering algorithms encounter problems such as low recommendation performance and cold start due to high data sparsity and uneven distribution. In addition, these collaborative filtering algorithms do not effectively consider the implicit trust relationships between users. To this end, this paper proposes a collaborative filtering recommendation algorithm based on GraphSAGE (GraphSAGE-CF). The algorithm first uses GraphSAGE to learn low-dimensional feature representations of the global and local structures of user nodes in the social network, then computes the implicit trust relationships between users from these learned representations, and finally combines the ratings of users and their implicitly trusted users on related items to predict users' ratings of target items. Experimental results on four open standard datasets show that the proposed GraphSAGE-CF algorithm is superior to existing algorithms in terms of RMSE and MAE.
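
    To make the pipeline in this abstract concrete, the sketch below chains a GraphSAGE-style mean aggregation, a cosine-similarity "implicit trust" measure, and a trust-weighted rating prediction. The aggregation rule, similarity measure, and prediction formula are all assumptions for illustration, not the paper's exact definitions.

```python
# Illustrative sketch: GraphSAGE-style mean aggregation, cosine-similarity
# implicit trust, and a trust-weighted rating prediction. All formula
# choices here are assumptions, not the paper's exact method.
import numpy as np

def mean_aggregate(features, adj, W):
    """One GraphSAGE-style layer: concatenate each node's features with the
    mean of its neighbours' features, then apply a linear map and ReLU."""
    neigh_mean = adj @ features / np.maximum(adj.sum(1, keepdims=True), 1)
    h = np.concatenate([features, neigh_mean], axis=1) @ W
    return np.maximum(h, 0)

def implicit_trust(emb):
    """Pairwise cosine similarity between user embeddings."""
    norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return norm @ norm.T

def predict_rating(u, item, ratings, trust):
    """Trust-weighted average of other users' ratings on the target item."""
    rated = ratings[:, item] > 0
    w = trust[u, rated]
    return float(w @ ratings[rated, item] / (np.abs(w).sum() + 1e-8))

# Toy example: 4 users, 3 items, and a small social graph.
feat = np.random.rand(4, 8)
adj = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)
emb = mean_aggregate(feat, adj, np.random.rand(16, 8))
ratings = np.array([[5, 0, 3], [4, 2, 0], [0, 5, 4], [1, 0, 2]], float)
print(predict_rating(0, 1, ratings, implicit_trust(emb)))
```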

    Using CNN with Multi-Level Information Fusion for Image Denoising

    No full text
    Deep convolutional neural networks (CNNs) with hierarchical architectures have obtained good results for image denoising. However, when the noise level is unknown and the image background is complex, it is challenging to obtain robust information through a CNN. In this paper, we present a multi-level information fusion CNN (MLIFCNN) for image denoising, containing a fine information extraction block (FIEB), a multi-level information interaction block (MIIB), a coarse information refinement block (CIRB), and a reconstruction block (RB). To adapt to more complex image backgrounds, the FIEB uses parallel group convolution to extract wide-channel information. To enhance the robustness of the obtained information, the MIIB uses residual operations acting on two sub-networks to implement the interaction of wide and deep information and to adapt to the distribution of different noise levels. To enhance the stability of training the denoiser, the CIRB stacks common and group convolutions to refine the obtained information. Finally, the RB uses a residual operation acting on a single convolution to obtain the resulting clean image. Experimental results show that our method outperforms many other excellent methods in terms of both quantitative and qualitative aspects.
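
    As a rough illustration of the parallel group convolution used by the fine information extraction block described above, the sketch below fuses two grouped 3x3 branches with a residual connection (PyTorch). The channel counts, kernel sizes, and fusion scheme are illustrative assumptions, not the paper's actual FIEB design.

```python
# Minimal sketch of the "parallel group convolution" idea of the FIEB.
# Channel counts, kernel sizes, and the branch fusion are assumptions.
import torch
import torch.nn as nn

class FIEBSketch(nn.Module):
    def __init__(self, channels=64, groups=4):
        super().__init__()
        # Two parallel grouped 3x3 convolutions extract wide-channel features.
        self.branch_a = nn.Conv2d(channels, channels, 3, padding=1, groups=groups)
        self.branch_b = nn.Conv2d(channels, channels, 3, padding=1, groups=groups)
        # A 1x1 convolution fuses the concatenated branches back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        wide = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        # Residual connection keeps the original information flowing.
        return self.act(self.fuse(wide)) + x

# Toy usage on a 64-channel feature map.
y = FIEBSketch()(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```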

    A Dual CNN for Image Super-Resolution

    No full text
    High-quality images have an important effect on high-level tasks. However, due to human factors and camera hardware, digital devices often collect low-resolution images. Deep networks can effectively restore these degraded images via their strong learning abilities. However, most of these networks depend on deeper architectures to enhance the clarity of predicted images, and single features cannot deal well with complex scenes. In this paper, we propose a dual super-resolution CNN (DSRCNN) to obtain high-quality images. DSRCNN relies on two sub-networks to extract complementary low-frequency features and enhance the learning ability of the SR network. To prevent a long-term dependency problem, a combination of convolutions and a residual learning operation is embedded into the dual sub-networks. To prevent information loss from the original image, an enhanced block is used to gather original information and obtain high-frequency information from a deeper layer via sub-pixel convolutions. To obtain more high-frequency features, a feature learning block is used to learn more details of high-frequency information. The proposed method is well suited to image super-resolution for complex scenes. Experimental results show that the proposed DSRCNN is superior to other popular SR networks; for instance, DSRCNN obtains an improvement of 0.08 dB over MemNet on Set5 for ×3 upscaling.
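
    The sub-pixel convolution mentioned in this abstract is a standard upsampling technique: a convolution expands the channel dimension by scale², and a pixel-shuffle rearranges those channels into a higher-resolution map. The sketch below shows this idea in PyTorch; the channel count and scale factor are illustrative assumptions, not the DSRCNN configuration.

```python
# Minimal sketch of sub-pixel upsampling: expand channels by scale**2 with a
# convolution, then rearrange them into space with PixelShuffle.
# Channel counts and the scale factor are illustrative assumptions.
import torch
import torch.nn as nn

class SubPixelUpsample(nn.Module):
    def __init__(self, channels=64, scale=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)  # (C*s^2, H, W) -> (C, H*s, W*s)

    def forward(self, x):
        return self.shuffle(self.conv(x))

# Toy usage: upscale a 64-channel low-resolution feature map by x3.
lr = torch.randn(1, 64, 24, 24)
print(SubPixelUpsample()(lr).shape)  # torch.Size([1, 64, 72, 72])
```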

    Effects of ultrasonic and steam-cooking treatments on the physicochemical properties of bamboo shoots protein and the stability of O/W emulsion

    No full text
    In this study, the effects of ultrasonic and steam-cooking treatments on the physicochemical and emulsifying properties of bamboo shoots protein (BSP) were investigated. The particle size and the polydispersity index (PDI) of U-BSP (ultrasonic-treated BSP) both decreased. Fourier transform infrared spectroscopy (FTIR) showed that the secondary structure of U-BSP was looser. Furthermore, X-ray diffraction (XRD) and thermogravimetric (TGA) analyses suggested that the crystallinity and thermal stability of U-BSP both decreased. The water and oil holding capacity (WHC/OHC) of U-BSP increased, while steam-cooking treatment had the reverse effect. We also investigated the effects of ultrasonic and steam-cooking treatments on BSP-stabilized emulsions. The viscosity of the emulsion stabilized by U-BSP increased, and the distribution of emulsion droplets was more uniform and smaller. The results showed that ultrasonic treatment significantly improved the stability of BSP-stabilized emulsions, while steam-cooking treatment had a significant negative impact on their stability. This work indicates that ultrasonication is an effective treatment for improving the emulsifying properties of BSP.

    Surveillance systems for neglected tropical diseases: global lessons from China’s evolving schistosomiasis reporting systems, 1949–2014

    No full text
    Though it has been a focus of the country’s public health surveillance systems since the 1950s, schistosomiasis remains an ongoing public health challenge in China. Parallel, schistosomiasis-specific surveillance systems have been essential to China’s decades-long campaign to reduce the prevalence of the disease, and have contributed to its successful elimination in five of China’s twelve historically endemic provinces and to the achievement of morbidity and transmission control in the other seven. More recently, an ambitious goal of achieving nationwide transmission interruption by 2020 has been proposed. This paper details how schistosomiasis surveillance systems have been structured and restructured within China’s evolving public health system, and how parallel surveillance activities have provided an information system that has been integral to the characterization of, response to, and control of the disease. With the ongoing threat of re-emergence of schistosomiasis in areas previously considered to have achieved transmission control, a critical examination of China’s current surveillance capabilities is needed to direct future investments in health information systems and to enable improved coordination between systems in support of ongoing control. Lessons drawn from China’s experience are applied to the current global movement to reduce the burden of helminthiases, where surveillance capacity based on improved diagnostics is urgently needed.